46 research outputs found

    Analysis of cognitive framework and biomedical translation of tissue engineering in otolaryngology

    Tissue engineering is a relatively recent research area aimed at developing artificial tissues that can restore, maintain, or even improve the anatomical and/or functional integrity of injured tissues. Otolaryngology, as a leading surgical specialty in head and neck surgery, is a candidate for the use of these advanced therapies and medicinal products. Nevertheless, a knowledge-based analysis of both areas together is still needed. The dataset was retrieved from the Web of Science database for the period 1900 to 2020. SciMAT software was used to perform the science mapping analysis, and the data for the biomedical translation identification were obtained from the iCite platform. Regarding the analysis of the cognitive structure, we found consolidated research lines, such as the generation of cartilage for use as a graft in reconstructive surgery, the reconstruction of microtia, and the closure of tympanic membrane perforations. This last research area shows the most relevant clinical translation, with the remaining areas presenting a lower translational level. In conclusion, tissue engineering is still at an early translational stage in otolaryngology, otology being the field where most advances have been achieved. Therefore, although otolaryngologists should play an active role in translational research in tissue engineering, greater multidisciplinary efforts are required to promote and encourage the translation of potential clinical applications of tissue engineering into routine clinical use. Funding: Spanish State Research Agency through the project PID2019-105381GA-I00/AEI/10.13039/501100011033 (iScience); CTS-115 (Tissue Engineering Research Group, University of Granada) from Junta de Andalucía, Spain.

    Latent Dirichlet Allocation (LDA) for improving the topic modeling of the official bulletin of the spanish state (BOE)

    Since the birth of the Internet, most people have had free access to a great number of information sources. Every day many web pages are created and new content is uploaded and shared. Never in history have humans been so informed, yet at the same time so uninformed, owing to the huge amount of accessible information. When we look for something in any search engine, the results are too numerous to read and filter one by one. Recommender Systems (RS) were created to help us discriminate and filter this information according to our preferences. This contribution analyses the RS of the official publications agency in Spain (BOE), known as "Mi BOE". The way this RS works was analysed, and all the metadata of the published documents were examined in order to determine the coverage of the system. The results of our analysis show that more than 89% of the documents cannot be recommended because they are not well described at the documentary level: some of their key metadata fields are empty. This contribution therefore proposes a method to label documents automatically based on Latent Dirichlet Allocation (LDA). The results show that, using this approach, the system could (from a theoretical point of view) recommend more than twice as many documents as it currently does: 11% before vs. 23% after applying this approach.

    A cloud-based tool for sentiment analysis in reviews about restaurants on TripAdvisor

    The tourism industry promotes its products and services based on the reviews that people write on travel websites such as TripAdvisor.com, Booking.com, and similar platforms. These reviews have a profound effect on the decision-making process when evaluating which places to visit, which restaurants to book, etc. This contribution presents a cloud-based software tool for the massive analysis of this social media data (TripAdvisor.com). The main characteristics of the tool developed are: i) the ability to aggregate data obtained from social media; ii) the possibility of carrying out combined analyses of both people and comments; iii) the ability to detect the sentiment (positive, negative, or neutral) of the comments, quantifying the degree to which they are positive or negative, as well as predicting behaviour patterns from this information; and iv) the ease of doing everything in the same application (data downloading, pre-processing, analysis, and visualisation). As a test and validation case, more than 33,500 reviews written in English about restaurants in the Province of Granada (Spain) were analysed.
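The polarity detection in point iii) can be illustrated with a toy lexicon-based scorer. The word lists and reviews below are invented for the example and are not the lexicon or data used by the actual tool.

```python
# Toy lexicon-based sentiment scorer: counts positive and negative words
# and returns a label plus a score in [-1, 1]. Lexicons are invented.
POSITIVE = {"great", "delicious", "friendly", "excellent", "tasty"}
NEGATIVE = {"slow", "rude", "cold", "awful", "expensive"}

def sentiment(review: str) -> tuple[str, float]:
    """Return (label, score) where score ranges from -1 (negative) to 1 (positive)."""
    words = review.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    if total == 0:
        return ("neutral", 0.0)
    score = (pos - neg) / total
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return (label, score)

print(sentiment("The food was delicious and the staff friendly"))  # positive, 1.0
print(sentiment("Service was slow and the waiter rude"))           # negative, -1.0
```

Production systems typically replace the hand-made lexicon with a trained model, but the label-plus-intensity output shape is the same idea the abstract describes.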

    Discovering Rehabilitation trends in Spain: A bibliometric analysis

    The main purpose of this study is to offer an overview of the rehabilitation research area in Spain from 1970 to 2018 through a bibliometric analysis. A performance analysis and a co-word science mapping analysis were conducted to highlight the topics covered. The software tool SciMAT was used to analyse the themes in terms of their performance and impact measures. A total of 3,564 documents were retrieved from the Web of Science. Univ Deusto, Univ Rey Juan Carlos, and the Basque Foundation for Science are the institutions with the highest relative priority. The most important research themes are Intellectual-Disability, Neck-Pain, and Pain.

    Software tools for conducting bibliometric analysis in science: An up-to-date review

    Bibliometrics has become an essential tool for assessing and analyzing the output of scientists, cooperation between universities, the effect of state-owned science funding on national research and development performance, and educational efficiency, among other applications. Professionals and scientists therefore need a range of theoretical and practical tools to measure experimental data. This review aims to provide an up-to-date overview of the various tools available for conducting bibliometric and scientometric analyses, including the sources of data acquisition, performance analysis, and visualization tools. The included tools were divided into three categories: general bibliometric and performance analysis, science mapping analysis, and libraries; a description of each is provided. A comparative analysis of database source support, pre-processing capabilities, and analysis and visualization options is also provided in order to facilitate understanding. Although there are numerous bibliometric databases from which to obtain data for bibliometric and scientometric analysis, each has been developed for a different purpose. The number of exportable records ranges between 500 and 50,000, and the coverage of the different science fields is unequal across databases. Concerning the analyzed tools, Bibliometrix contains the most extensive set of techniques and is suitable for practitioners through Biblioshiny. VOSviewer offers outstanding visualization and is capable of loading and exporting information from many sources. SciMAT is the tool with the most powerful pre-processing and export capabilities. In view of the variability of features, users need to decide on the desired analysis output and choose the option that best fits their aims.
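The co-word counting that underlies the science mapping tools mentioned above (e.g., SciMAT, VOSviewer) can be sketched in a few lines. The keyword sets below are invented for illustration; real tools add normalisation and clustering on top of this step.

```python
# Sketch of the co-word counting step behind science mapping:
# count how often pairs of author keywords co-occur across papers.
from itertools import combinations
from collections import Counter

papers = [
    {"bibliometrics", "citation analysis", "h-index"},
    {"bibliometrics", "science mapping", "co-word analysis"},
    {"science mapping", "co-word analysis", "clustering"},
]

cooccurrence = Counter()
for keywords in papers:
    # Sorting gives a canonical order so (a, b) and (b, a) count together.
    for a, b in combinations(sorted(keywords), 2):
        cooccurrence[(a, b)] += 1

# Pairs seen in more than one paper form the backbone of the thematic network.
strong_edges = {pair for pair, n in cooccurrence.items() if n > 1}
```

From this weighted co-occurrence network, mapping tools then derive clusters of keywords, which are the "themes" reported in the analyses throughout this listing.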

    SPECIAL TRACK: Present and future of research metadata: where do we want to go from here?

    To steer the scientific system towards specific goals, it is first necessary to develop an effective understanding of all phases and aspects of the research workflow. Research metadata, as the collective record of traces that are generated when scientific activities take place, serves as evidence of these activities. Therefore, the availability of authoritative research metadata is essential for science-related decision-making at various levels. In the past, large-scale research metadata collections mostly dealt with items in the public record, such as bibliographic metadata about academic publications. There used to be few of these large-scale metadata collections, and they were often provided by commercial actors, which invested the necessary resources to compile and process dispersed public information with the goal of turning it into usable services. As the capabilities of available technologies increase, more sectors of the scientific system are becoming aware of how their activities could benefit from updating their workflows each day, a process often referred to as digital transformation. Thus, a plethora of tools and standards are being developed to streamline processes, increase interoperability, and in general overcome the limitations of the paper era. This is having a large effect on the quantity and quality of research metadata that is now being recorded. A clear example of the above is the case of bibliographic metadata. Currently, an increasing number of organizations, spurred by the decreasing barriers to collecting and processing large amounts of bibliographic metadata, are already providing services and datasets that rival the offerings of the traditional commercial providers. Some of these new datasets, provided under open licenses that allow unrestricted reuse and redistribution, have boosted innovation by allowing the development of downstream applications that rely on these metadata collections.
    However, as scientific activities in general and scientific communication in particular are increasingly moving to the digital space, traditional bibliographic metadata is no longer the only kind of research metadata that is being collected and processed at a large scale to inform decisions. Social network platforms now capture a portion of academic-related conversations and other kinds of interactions. Processes such as peer review that were previously carried out behind closed doors are now being opened, generating their own public trace. Publishing platforms are implementing increasingly sophisticated methods to track and mine user actions for their benefit. All these recent developments call for a discussion on the role of research metadata in the scientific system going forward. This discussion should be open to a large variety of stakeholders, including data providers, scientometricians, academic librarians, higher education institutions, policy managers, and developers of downstream applications. The topics of the contributions to this special track can include:
    • Analyses of the suitability of research metadata sources for specific use cases
    • Sustainability and governance of research metadata
    • Innovations in research metadata
    • Downstream applications of open research metadata
    • Surveillance through research metadata
    Contributions to this special track would be open to everyone interested and peer-reviewed. The format of the session would be 15-20 minutes per presentation, with time for questions after each presentation.

    Why do papers from international collaborations get more citations? A bibliometric analysis of Library and Information Science papers

    Scientific activity has become increasingly complex in recent years. The need for international research collaboration has thus become a common pattern in science. In this current landscape, countries face the problem of maintaining their competitiveness while cooperating with other countries to achieve relevant research outputs. In this international context, publications from international collaborations tend to achieve greater scientific impact than those from domestic ones. To design policies that improve the competitiveness of countries and organizations, it thus becomes necessary to understand the factors and mechanisms that influence the benefits and impact of international research. In this regard, the aim of this study is to confirm whether the differences in impact between international and domestic collaborations are affected by their topics and structure. To perform this study, we examined the Library and Information Science category of the Web of Science database between 2015 and 2019. A science mapping analysis approach was used to extract the themes and their structure according to collaboration type and in the whole category (2015–2019). We also looked for differences in these thematic aspects in top countries and in communities of collaborating countries. The results showed that the thematic factor influences the impact of international research, as the themes in this type of collaboration lie at the forefront of the Library and Information Science category (e.g., technologies such as artificial intelligence and social media are found in the category), while domestic collaborations have focused on more well-consolidated themes (e.g., academic libraries and bibliometrics). Organizations, countries, and communities of countries must therefore consider this thematic factor when designing strategies to improve their competitiveness and collaborate. Funding: Spanish Government PID2019-105381GA-I00/AEI/10.13039/50110001103
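The grouping that the thematic comparison above relies on, splitting a corpus into international and domestic collaborations from author country lists, can be sketched as follows. The records are invented examples, not data from the study.

```python
# Classify papers as international (authors from >1 country) or
# domestic (all authors from the same country). Records are invented.
def collaboration_type(countries: list[str]) -> str:
    """One distinct country -> domestic; more than one -> international."""
    return "international" if len(set(countries)) > 1 else "domestic"

papers = [
    {"title": "A", "countries": ["Spain", "Spain"]},
    {"title": "B", "countries": ["Spain", "Netherlands"]},
    {"title": "C", "countries": ["USA"]},
]

groups = {"domestic": [], "international": []}
for p in papers:
    groups[collaboration_type(p["countries"])].append(p["title"])
```

Each group can then be fed separately into a co-word science mapping analysis to compare the themes each collaboration type focuses on.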

    Group Decision-Making Based on Artificial Intelligence: A Bibliometric Analysis

    Decisions concerning crucial and complicated problems are seldom made by a single person. Instead, they require the cooperation of a group of experts in which each participant has their own individual opinions, motivations, background, and interests regarding the existing alternatives. In the last 30 years, much research has been undertaken to provide automated assistance in reaching a consensual solution supported by most of the group members. Artificial intelligence techniques are commonly applied to tackle critical group decision-making difficulties. For instance, experts' preferences are often vague and imprecise; hence, their opinions are combined using fuzzy linguistic approaches. This paper reports a bibliometric analysis of the ample literature published in this regard. In particular, our analysis: (i) shows the impact and the upward publication trend of this topic; (ii) identifies the most productive authors, institutions, and countries; (iii) discusses authors' and journals' productivity patterns; and (iv) recognizes the most relevant research topics and how the interest in them has evolved over the years.
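One common fuzzy linguistic aggregation step mentioned above can be sketched as follows. The term scale, the rounding-based aggregation, and the consensus measure are simplified illustrations invented for this example, not a specific model from the surveyed literature.

```python
# Illustrative fuzzy linguistic aggregation: map expert opinions on a
# linguistic term scale to indices, average them, and derive a simple
# consensus degree. Scale and consensus formula are invented examples.
SCALE = ["very_low", "low", "medium", "high", "very_high"]

def aggregate(opinions: list[str]) -> tuple[str, float]:
    """Return the collective linguistic label and a consensus degree in [0, 1]."""
    idx = [SCALE.index(o) for o in opinions]
    mean = sum(idx) / len(idx)
    collective = SCALE[round(mean)]
    # Consensus: 1 minus the mean absolute deviation, normalised by scale width.
    spread = sum(abs(i - mean) for i in idx) / len(idx)
    consensus = 1 - spread / (len(SCALE) - 1)
    return collective, consensus

label, consensus = aggregate(["high", "high", "very_high", "medium"])
```

A consensus-reaching process would typically compare `consensus` against a threshold and ask diverging experts to revise their opinions until the threshold is met.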

    The last five years of Big Data Research in Economics, Econometrics and Finance: Identification and conceptual analysis

    Today, the term Big Data has a multidimensional scope in which five main characteristics stand out: volume, velocity, veracity, value, and variety. It has changed from an emerging theme into a growing research area. In this respect, this study analyses the literature on Big Data in the Economics, Econometrics and Finance field. To do so, 1,034 publications from 2015 to 2019 were evaluated using SciMAT as bibliometric and network analysis software. SciMAT offers a complete view of the field and identifies the most cited and productive authors, countries, and subject areas related to Big Data. Lastly, a science mapping analysis is performed to understand the intellectual structure and the main research lines (themes).
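A classic indicator computed in performance analyses like the one above is the h-index of an author or unit. A minimal sketch, with invented citation counts:

```python
# h-index: the largest h such that h of the author's papers have at
# least h citations each. Citation counts below are invented.
def h_index(citations: list[int]) -> int:
    """Compute the h-index from a list of per-paper citation counts."""
    for h, c in enumerate(sorted(citations, reverse=True), start=1):
        if c < h:
            return h - 1
    return len(citations)

# An author with papers cited 10, 8, 5, 4, and 3 times has h-index 4:
# four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Tools such as SciMAT report this kind of indicator per theme, so that themes can be ranked not only by document count but also by impact.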

    Conceptual structure of federated learning research field

    Nowadays there is a great amount of data that can be used to train artificial intelligence systems for classification or prediction purposes. Although much data is publicly available, there is also very valuable data that is private and therefore cannot be shared without breaking data protection laws. For example, hospital data has great value, but it concerns individuals, so we must preserve their privacy rights. Furthermore, although it can be useful to train a model with the data of only one entity (i.e., one hospital), a model trained with the data of several entities can be even more valuable. But, since the data of each entity might not be shareable, it is not possible to train such a global model directly. In this sense, Federated Learning has emerged as a research field that deals with the training of complex models without the need to share data, thereby keeping the data private. In this contribution, we present a global conceptual analysis, based on co-word networks, of the Federated Learning research field. To do so, the field was delimited using an advanced query in Web of Science. The corpus contains a total of 2,444 documents. As the main result, it should be highlighted that the Federated Learning research field is focused on six main global areas: telecommunications, privacy and security, computer architecture and data modeling, machine learning, and applications.
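The core federated training idea described above, combining models without moving the data, can be sketched with the federated averaging (FedAvg) aggregation step. The weight vectors and client sizes below are invented for illustration; in practice each client trains its local model on its own private data before sending only the parameters.

```python
# Minimal FedAvg aggregation: the server averages client model parameters
# weighted by client data size; raw data never leaves the clients.
def fed_avg(client_weights: list[list[float]], client_sizes: list[int]) -> list[float]:
    """Weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hospitals with locally trained parameters and different data volumes.
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[100, 300])
print(global_model)  # [2.5, 3.5]
```

In a full federated round, the server broadcasts `global_model` back to the clients, which continue training locally, and the cycle repeats until convergence.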